Review for NeurIPS paper: Practical No-box Adversarial Attacks against DNNs
I wouldn't say this is a weakness, but it would be good to cite some work on adversarial attacks against auto-encoders and image translation networks. These works share the defining characteristic of attacking networks that take an image as input and produce an image as output, with attacks adapted from methods such as FGSM, I-FGSM, and PGD. This is related to the idea presented here, since the attack is generated on an auto-encoder and then transferred to a target model. None of these works diminish the novelty of the paper's ideas; they are simply related. But that is just me.
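To make the adaptation concrete, here is a minimal sketch of an FGSM-style step against a reconstruction objective, of the kind the cited line of work applies to auto-encoders. The toy linear "auto-encoder" (encode with `W`, decode with `W.T`) and all function names are illustrative stand-ins, not the paper's actual model or code; real attacks would backpropagate through a trained DNN.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy linear "auto-encoder": encode with W, decode with W.T.
# (Hypothetical stand-in for a trained image-to-image network.)
d, k = 8, 3
W = rng.standard_normal((k, d)) / np.sqrt(d)
A = W.T @ W  # full reconstruction map: x -> A @ x

def recon_loss(x):
    # Reconstruction error ||A x - x||^2, the quantity the attack increases.
    r = A @ x - x
    return float(r @ r)

def loss_grad(x):
    # Analytic input gradient of ||A x - x||^2 (in place of autodiff).
    return 2.0 * (A - np.eye(d)).T @ ((A - np.eye(d)) @ x)

def fgsm(x, eps):
    # One FGSM step: perturb along the sign of the input gradient,
    # bounded in L-infinity norm by eps, to degrade reconstruction.
    return x + eps * np.sign(loss_grad(x))

x = rng.standard_normal(d)
x_adv = fgsm(x, eps=0.05)
assert recon_loss(x_adv) > recon_loss(x)  # perturbation worsens reconstruction
```

Iterating this step with a projection back into the eps-ball gives the I-FGSM/PGD variants the review refers to.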
Practical No-box Adversarial Attacks against DNNs
The study of adversarial vulnerabilities of deep neural networks (DNNs) has progressed rapidly. Existing attacks require either internal access (to the architecture, parameters, or training set of the victim model) or external access (the ability to query the model). However, both types of access may be infeasible or expensive in many scenarios. We investigate no-box adversarial examples, where the attacker can neither access the model's information or training set nor query the model. Instead, the attacker can only gather a small number of examples from the same problem domain as that of the victim model.